Deep learning has attained remarkable success in many 3D visual recognition tasks, including shape classification, object detection, and semantic segmentation. However, many of these results rely on manually collected, densely annotated real-world 3D data, which is highly time-consuming and expensive to obtain, limiting the scalability of 3D recognition. Thus, we study unsupervised 3D recognition and propose a Self-supervised-Self-Labeled 3D Recognition (SL3D) framework. SL3D simultaneously solves two coupled objectives, clustering and learning feature representations, to generate pseudo-labeled data for unsupervised 3D recognition. SL3D is a generic framework that can be applied to different 3D recognition tasks, including classification, object detection, and semantic segmentation. Extensive experiments demonstrate its effectiveness. Code is available at https://github.com/fcendra/sl3d.
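The coupled clustering-and-pseudo-labeling idea can be illustrated with a minimal sketch: cluster the current self-supervised features, then reuse the cluster assignments as labels for the next training round. The k-means choice and the function below are illustrative assumptions, not the authors' exact procedure.

```python
# Hypothetical sketch of one SL3D-style self-labeling round.
import numpy as np
from sklearn.cluster import KMeans

def pseudo_label_round(features: np.ndarray, num_classes: int) -> np.ndarray:
    """Cluster (N, D) self-supervised features; cluster ids become pseudo-labels."""
    kmeans = KMeans(n_clusters=num_classes, n_init=10).fit(features)
    return kmeans.labels_  # supervision for the next representation-learning round
```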
In this paper, we empirically study how to make the most of low-resolution frames for efficient video recognition. Existing methods mainly focus on developing compact networks or alleviating the temporal redundancy of video inputs to increase efficiency, whereas compressing the frame resolution has rarely been considered a promising solution. A major concern is the poor recognition accuracy on low-resolution frames. We thus start by analyzing the underlying causes of performance degradation on low-resolution frames. Our key finding is that the major cause of degradation is not information loss during down-sampling, but rather the mismatch between network architecture and input scale. Motivated by the success of knowledge distillation (KD), we propose to bridge the gap between network and input size via cross-resolution KD (ResKD). Our work shows that ResKD is a simple but effective method to boost recognition accuracy on low-resolution frames. Without bells and whistles, ResKD considerably surpasses all competitive methods in terms of both efficiency and accuracy on four large-scale benchmark datasets, i.e., ActivityNet, FCVID, Mini-Kinetics, and Something-Something V2. In addition, we extensively demonstrate its effectiveness over state-of-the-art architectures, i.e., 3D-CNNs and Video Transformers, and its scalability towards super low-resolution frames. The results suggest that ResKD can serve as a general inference acceleration method for state-of-the-art video recognition. Our code will be available at https://github.com/cvmi-lab/reskd.
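The core loss is ordinary knowledge distillation applied across resolutions: a teacher sees high-resolution frames, a student sees low-resolution ones, and the student matches the teacher's softened predictions. A minimal sketch, assuming the standard temperature-softened KL formulation (the exact loss used by ResKD may differ):

```python
# Minimal cross-resolution KD loss in the spirit of ResKD (assumed form).
import torch
import torch.nn.functional as F

def cross_resolution_kd_loss(student_logits, teacher_logits, T: float = 4.0):
    """KL divergence between temperature-softened class distributions."""
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    # T*T rescales gradients to the magnitude of the hard-label loss.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T
```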
With the rapid development of mobile devices, modern widely-used mobile phones typically allow users to capture 4K resolution (i.e., ultra-high-definition) images. However, for image demoiréing, a challenging task in low-level vision, existing works are generally carried out on low-resolution or synthetic images. Hence, the effectiveness of these methods on 4K resolution images is still unknown. In this paper, we explore moiré pattern removal for ultra-high-definition images. To this end, we propose the first ultra-high-definition demoiréing dataset (UHDM), which contains 5,000 real-world 4K resolution image pairs, and conduct a benchmark study on current state-of-the-art methods. Further, we present an efficient baseline model, ESDNet, for tackling 4K moiré images, wherein we build a semantic-aligned scale-aware module to address the scale variation of moiré patterns. Extensive experiments demonstrate the effectiveness of our approach, which outperforms state-of-the-art methods while being much more lightweight. Code and dataset are available at https://xinyu-andy.github.io/uhdm-page.
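One common way to make a module scale-aware is to process features at several dilation rates in parallel and fuse the results; the block below is an illustrative stand-in under that assumption, not the authors' semantic-aligned design.

```python
# Illustrative stand-in for a scale-aware block: parallel dilated convolutions
# capture moiré patterns at different scales, then a 1x1 conv fuses them.
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in dilations
        )
        self.fuse = nn.Conv2d(channels * len(dilations), channels, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))
```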
With the rise of deep learning and intelligent vehicles, the smart assistant has become an essential in-car component that facilitates driving and provides extra functionalities. In-car smart assistants should be able to process general as well as car-related commands and perform corresponding actions, which eases driving and improves safety. However, there is a data scarcity issue for low-resource languages, hindering the development of research and applications. In this paper, we introduce a new dataset, Cantonese In-car Audio-Visual Speech Recognition (CI-AVSR), for in-car command recognition in the Cantonese language with both video and audio data. It consists of 4,984 samples (8.3 hours) of 200 in-car commands recorded by 30 native Cantonese speakers. Furthermore, we augment our dataset using common in-car background noises to simulate real environments, producing a dataset 10 times larger than the collected one. We provide detailed statistics of both the clean and the augmented versions of our dataset. Moreover, we implement two multimodal baselines to demonstrate the validity of CI-AVSR. Experimental results show that leveraging the visual signal improves the overall performance of the model. Although our best model achieves considerable quality on the clean test set, speech recognition quality on the noisy data is still inferior and remains an extremely challenging task for real in-car speech recognition systems. The dataset and code will be released at https://github.com/hltchkust/ci-avsr.
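The augmentation step described above amounts to mixing recorded background noise into clean speech at a controlled signal-to-noise ratio. A plausible sketch of that step (the paper's exact mixing scheme is an assumption here):

```python
# Plausible sketch: mix in-car background noise into clean speech at a target SNR.
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale `noise` so the mixture has the requested SNR, then add it."""
    noise = np.resize(noise, speech.shape)        # loop/trim noise to match length
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise
```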
Automatic speech recognition (ASR) on low-resource languages improves the access of linguistic minorities to the technological advantages provided by artificial intelligence (AI). In this paper, we address the data scarcity problem for the Hong Kong Cantonese language by creating a new Cantonese dataset. Our dataset, the Multi-Domain Cantonese Corpus (MDCC), consists of 73.6 hours of clean read speech paired with transcripts, collected from Cantonese audiobooks from Hong Kong. It comprises the philosophy, politics, education, culture, lifestyle, and family domains, covering a wide range of topics. We also review all existing Cantonese datasets and perform experiments on the two biggest ones (MDCC and Common Voice zh-HK). We analyze the existing datasets according to their speech type, data source, total size, and availability. The results of experiments conducted with the Fairseq S2T Transformer, a state-of-the-art ASR model, show the effectiveness of our dataset. In addition, we create a powerful and robust Cantonese ASR model by applying multi-dataset learning on MDCC and Common Voice zh-HK.
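One simple reading of multi-dataset learning is training a single model on the union of the corpora. A minimal PyTorch sketch under that assumption, with each corpus already wrapped as a Dataset:

```python
# Minimal sketch: multi-dataset learning as training on the union of corpora.
from torch.utils.data import ConcatDataset, DataLoader, Dataset

def build_multi_dataset_loader(datasets: list[Dataset], batch_size: int = 16) -> DataLoader:
    """Combine several ASR corpora (e.g., MDCC and Common Voice zh-HK) into one loader."""
    return DataLoader(ConcatDataset(datasets), batch_size=batch_size, shuffle=True)
```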
A recent study has revealed a phenomenon called neural collapse: at the terminal phase of training for classification, the within-class means of features and the classifier weight vectors converge to the vertices of a simplex equiangular tight frame (ETF). In this paper, we explore the corresponding structures of the last-layer feature centers and classifiers in semantic segmentation. Based on our empirical and theoretical analysis, we point out that semantic segmentation naturally brings contextual correlation and imbalanced distribution among classes, which breaks the equiangular and maximally separated structure of neural collapse for both feature centers and classifiers. However, such a symmetric structure is beneficial to discrimination for the minor classes. To preserve these advantages, we introduce a regularizer on feature centers to encourage the network to learn features closer to the appealing structure in imbalanced semantic segmentation. Experimental results show that our method brings significant improvements on both 2D and 3D semantic segmentation benchmarks. Moreover, our method ranks 1st and sets a new record (+6.8% mIoU) on the ScanNet200 test leaderboard. Code will be available at https://github.com/dvlab-research/Imbalanced-Learning.
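A hedged sketch of the regularization idea: build a fixed simplex ETF target (pairwise cosine -1/(K-1) between class directions) and pull per-class feature centers toward it. This is our reading of the mechanism, not the authors' exact loss.

```python
# Hedged sketch: regularize class feature centers toward a fixed simplex ETF.
import torch
import torch.nn.functional as F

def simplex_etf(num_classes: int, dim: int) -> torch.Tensor:
    """Return (num_classes, dim) unit vectors with pairwise cosine -1/(K-1)."""
    assert dim >= num_classes
    u = torch.linalg.qr(torch.randn(dim, num_classes)).Q      # orthonormal columns
    m = u @ (torch.eye(num_classes) - 1.0 / num_classes)      # center the basis
    return F.normalize(m.t(), dim=1)                          # K x dim, unit rows

def center_regularizer(centers: torch.Tensor, etf: torch.Tensor) -> torch.Tensor:
    """Mean cosine distance between per-class feature centers and ETF targets."""
    return (1.0 - F.cosine_similarity(centers, etf, dim=1)).mean()
```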
We propose a sparse end-to-end multi-person pose regression framework, termed QueryPose, which can directly predict multi-person keypoint sequences from the input image. The existing end-to-end methods rely on dense representations to preserve the spatial detail and structure for precise keypoint localization. However, the dense paradigm introduces complex and redundant post-processing during inference. In our framework, each human instance is encoded by several learnable spatial-aware part-level queries associated with an instance-level query. First, we propose the Spatial Part Embedding Generation Module (SPEGM), which uses a local spatial attention mechanism to generate several spatial-sensitive part embeddings containing the spatial details and structural information for enhancing the part-level queries. Second, we introduce the Selective Iteration Module (SIM) to adaptively update the sparse part-level queries via the generated spatial-sensitive part embeddings stage by stage. Based on the two proposed modules, the part-level queries are able to fully encode the spatial details and structural information for precise keypoint regression. With bipartite matching, QueryPose avoids hand-designed post-processing and surpasses the existing dense end-to-end methods with 73.6 AP on the MS COCO mini-val set and 72.7 AP on the CrowdPose test set. Code is available at https://github.com/buptxyb666/QueryPose.
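To make the stage-wise query refinement concrete, here is an illustrative update step in which part-level queries cross-attend to the spatial-sensitive part embeddings; the attention-plus-residual form is an assumption, not the published SIM internals.

```python
# Illustrative (assumed) stage-wise query refinement: part-level queries
# cross-attend to the spatial-sensitive part embeddings.
import torch.nn as nn

class QueryUpdate(nn.Module):
    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, part_queries, part_embeddings):
        # part_queries, part_embeddings: (batch, num_parts, dim)
        updated, _ = self.attn(part_queries, part_embeddings, part_embeddings)
        return self.norm(part_queries + updated)   # residual update per stage
```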
Although considerable progress has been obtained in neural network quantization for efficient inference, existing methods are not scalable to heterogeneous devices, as one dedicated model needs to be trained, transmitted, and stored for each specific hardware setting, incurring considerable costs in model training and maintenance. In this paper, we study a new vertical-layered representation of neural network weights for encapsulating all quantized models into a single one. With this representation, we can theoretically achieve any-precision networks for on-demand service while only needing to train and maintain one model. To this end, we propose a simple once quantization-aware training (QAT) scheme for obtaining high-performance vertical-layered models. Our design incorporates a cascade downsampling mechanism, which allows us to obtain multiple quantized networks from one full-precision source model by progressively mapping the higher-precision weights to their adjacent lower-precision counterparts. Then, with networks of different bit-widths from one source model, multi-objective optimization is employed to train the shared source model weights such that they can be updated simultaneously, considering the performance of all networks. By doing this, the shared weights are optimized to balance the performance of the different quantized models, making them transferable among different bit-widths. Experiments show that the proposed vertical-layered representation and the once-QAT scheme are effective in embodying multiple quantized networks into a single one while allowing one-time training, and deliver performance comparable to that of quantized models tailored to any specific bit-width. Code will be available.
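A rough sketch of the cascade idea under our assumptions (uniform symmetric quantization with a per-tensor scale): each lower bit-width network reuses the most significant bits of its higher-precision neighbor's weight codes.

```python
# Rough sketch (assumed mechanics): derive a lower-precision network from a
# higher-precision one by dropping least significant bits of the weight codes.
import torch

def quantize(w: torch.Tensor, bits: int):
    """Uniform symmetric quantization; returns integer codes and their scale."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max() / qmax
    codes = torch.clamp((w / scale).round(), -qmax, qmax)
    return codes, scale

def downsample_bits(codes: torch.Tensor, from_bits: int, to_bits: int) -> torch.Tensor:
    """Map higher-precision codes to the adjacent lower precision by floor
    division; the effective scale doubles for each dropped bit."""
    return torch.div(codes, 2 ** (from_bits - to_bits), rounding_mode="floor")
```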
Open-vocabulary scene understanding aims to localize and recognize unseen categories beyond the annotated label space. The recent breakthrough of 2D open-vocabulary perception is largely driven by Internet-scale paired image-text data with rich vocabulary concepts. However, this success cannot be directly transferred to 3D scenarios due to the inaccessibility of large-scale 3D-text pairs. To this end, we propose to distill knowledge encoded in pre-trained vision-language (VL) foundation models through captioning multi-view images from 3D, which allows explicitly associating 3D and semantic-rich captions. Further, to facilitate coarse-to-fine visual-semantic representation learning from captions, we design hierarchical 3D-caption pairs, leveraging geometric constraints between 3D scenes and multi-view images. Finally, by employing contrastive learning, the model learns language-aware embeddings that connect 3D and text for open-vocabulary tasks. Our method not only remarkably outperforms baseline methods by 25.8% $\sim$ 44.7% hIoU and 14.5% $\sim$ 50.4% hAP$_{50}$ on open-vocabulary semantic and instance segmentation, but also shows robust transferability on challenging zero-shot domain transfer tasks. Code will be available at https://github.com/CVMI-Lab/PLA.
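The contrastive objective connecting 3D and text can be sketched with a standard symmetric InfoNCE loss over matched (3D feature, caption embedding) pairs; the pairing granularity and temperature below are assumptions for illustration.

```python
# Minimal InfoNCE-style sketch for aligning 3D features with caption embeddings.
import torch
import torch.nn.functional as F

def point_caption_contrastive(feat_3d, feat_text, temperature: float = 0.07):
    """Symmetric InfoNCE over matched (3D, caption) pairs in a batch."""
    f3d = F.normalize(feat_3d, dim=1)
    ftx = F.normalize(feat_text, dim=1)
    logits = f3d @ ftx.t() / temperature
    targets = torch.arange(len(f3d), device=f3d.device)  # diagonal = positives
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```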
Weakly supervised detection of anomalies in surveillance videos is a challenging task. Going beyond existing works, whose ability to localize anomalies in long videos is limited, we propose a novel glance-and-focus network that effectively integrates spatial-temporal information for accurate anomaly detection. In addition, we empirically find that existing approaches that use feature magnitudes to represent the degree of anomaly typically ignore the effects of scene variations, and hence suffer sub-optimal performance due to the inconsistency of feature magnitudes across scenes. To address this issue, we propose a Feature Amplification Mechanism and a Magnitude Contrastive Loss to enhance the discriminativeness of feature magnitudes for detecting anomalies. Experimental results on two large-scale benchmarks, UCF-Crime and XD-Violence, demonstrate that our method outperforms state-of-the-art approaches.
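One plausible form of a magnitude contrast, written as a sketch under our own assumptions rather than the published loss: push the feature norms of abnormal snippets above those of normal snippets by a margin, so that magnitudes remain a consistent anomaly cue across scenes.

```python
# Assumed sketch of a magnitude contrast between normal and abnormal snippets.
import torch
import torch.nn.functional as F

def magnitude_contrastive_loss(normal_feats, abnormal_feats, margin: float = 1.0):
    """Hinge on the gap between abnormal and normal mean feature magnitudes."""
    m_normal = normal_feats.norm(dim=-1).mean()
    m_abnormal = abnormal_feats.norm(dim=-1).mean()
    return F.relu(margin - (m_abnormal - m_normal))
```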